Revealing User Familiarity Bias in Task-Oriented Dialogue via Interactive Evaluation
Most task-oriented dialogue (TOD) benchmarks assume users who know exactly
how to use the system, constraining user behavior to the system's
capabilities via strict user goals; this is the "user familiarity" bias. This data
bias deepens when combined with data-driven TOD systems, since its effects
cannot be captured by existing static evaluations. Hence, we conduct
an interactive user study to unveil how vulnerable TOD systems are to
realistic scenarios. In particular, we compare users given 1) detailed goal
instructions that conform to the system boundaries (closed-goal) and 2) vague
goal instructions that are often unsupported but realistic (open-goal). Our
study reveals that conversations in the open-goal setting lead to catastrophic
failures of the system: 92% of the dialogues had significant issues.
Moreover, we conduct a thorough analysis to identify distinctive features
between the two settings through error annotation. From this, we discover a
novel "pretending" behavior, in which the system pretends to handle the user
requests even though they are beyond the system's capabilities. We discuss its
characteristics and toxicity while emphasizing transparency and a fallback
strategy for robust TOD systems
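
The fallback recommendation can be sketched in a few lines: detect when a request falls outside the supported scope and say so, rather than pretending to handle it. The sketch below is a hypothetical illustration, not the authors' system; the intent set, confidence threshold, and the classify_intent/handle stubs are all assumptions.

# Hypothetical sketch of a transparent fallback for out-of-scope requests.
# The intent set, threshold, and stubs are assumptions, not the paper's system.
SUPPORTED_INTENTS = {"book_restaurant", "find_hotel", "request_taxi"}
CONFIDENCE_THRESHOLD = 0.7  # below this, treat the request as out of scope

def classify_intent(utterance: str) -> tuple[str, float]:
    """Stand-in for the system's NLU module; returns (intent, confidence)."""
    raise NotImplementedError

def handle(intent: str, utterance: str) -> str:
    """Stand-in for the normal task-oriented dialogue flow."""
    raise NotImplementedError

def respond(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if intent in SUPPORTED_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return handle(intent, utterance)
    # Transparent fallback instead of "pretending" to handle the request.
    supported = ", ".join(sorted(SUPPORTED_INTENTS))
    return f"Sorry, I can't help with that. I can only assist with: {supported}."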
EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria
By simply composing prompts, developers can prototype novel generative
applications with Large Language Models (LLMs). To refine prototypes into
products, however, developers must iteratively revise prompts by evaluating
outputs to diagnose weaknesses. Formative interviews (N=8) revealed that
developers invest significant effort in manually evaluating outputs as they
assess context-specific and subjective criteria. We present EvalLM, an
interactive system for iteratively refining prompts by evaluating multiple
outputs on user-defined criteria. By describing criteria in natural language,
users can employ the system's LLM-based evaluator to get an overview of where
prompts excel or fail, and improve these based on the evaluator's feedback. A
comparative study (N=12) showed that EvalLM, when compared to manual
evaluation, helped participants compose more diverse criteria, examine twice as
many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond
prompts, our work can be extended to augment model evaluation and alignment in
specific application contexts.
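
As a concrete illustration of criteria-based evaluation, the sketch below asks an LLM judge to compare two outputs against each user-defined criterion. The prompt wording and the call_llm stub are assumptions for illustration, not EvalLM's actual implementation.

# Hypothetical sketch of criteria-based comparison with an LLM judge.
JUDGE_TEMPLATE = """You are evaluating two candidate outputs for a task.

Task input:
{task_input}

Criterion (user-defined): {criterion}

Output A:
{output_a}

Output B:
{output_b}

Which output better satisfies the criterion? Answer "A" or "B", then give a
one-sentence justification."""

def call_llm(prompt: str) -> str:
    """Stand-in for an API call to the evaluator LLM."""
    raise NotImplementedError

def judge(task_input: str, output_a: str, output_b: str,
          criteria: list[str]) -> dict[str, str]:
    """Compare two outputs on each user-defined criterion."""
    return {
        criterion: call_llm(JUDGE_TEMPLATE.format(
            task_input=task_input, criterion=criterion,
            output_a=output_a, output_b=output_b))
        for criterion in criteria
    }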
Aligning Large Language Models through Synthetic Feedback
Aligning large language models (LLMs) to human values has become increasingly
important as it enables sophisticated steering of LLMs, e.g., making them
follow given instructions while keeping them less toxic. However, it requires
significant amounts of human demonstrations and feedback. Recently, open-source
models have attempted to replicate the alignment learning process by distilling
data from already aligned LLMs like InstructGPT or ChatGPT. While this process
reduces human effort, constructing these datasets depends heavily on
the teacher models. In this work, we propose a novel framework for alignment
learning with almost no human labor and no dependency on pre-aligned LLMs.
First, we perform reward modeling (RM) with synthetic feedback by contrasting
responses from vanilla LLMs with various sizes and prompts. Then, we use the RM
for simulating high-quality demonstrations to train a supervised policy and for
further optimizing the model with reinforcement learning. Our resulting model,
Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms
open-source models, including Alpaca, Dolly, and OpenAssistant, which are
trained on the outputs of InstructGPT or human-annotated instructions. Our
7B-sized model outperforms 12-13B models in A/B tests using GPT-4 as
the judge, with an average win rate of about 75%.
Comment: Preprint, 9 pages (with 10 pages of supplementary material)
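
The contrast-based synthetic feedback can be sketched as preference-pair construction for reward modeling: responses from larger models (or richer prompts) are assumed preferable to those from smaller ones. A minimal sketch under that assumption follows; the model names and the generate stub are hypothetical, not the paper's exact setup.

# Hypothetical sketch of synthetic preference pairs for reward modeling.
from itertools import combinations

# Assumed ranking: more parameters (or a better prompt) => better response.
MODELS_BY_QUALITY = ["vanilla-1b", "vanilla-7b", "vanilla-30b"]  # weakest first

def generate(model: str, prompt: str) -> str:
    """Stand-in for sampling a response from a vanilla (unaligned) LLM."""
    raise NotImplementedError

def synthetic_pairs(prompt: str) -> list[dict]:
    """Build (chosen, rejected) pairs by contrasting model responses."""
    responses = {m: generate(m, prompt) for m in MODELS_BY_QUALITY}
    pairs = []
    for worse, better in combinations(MODELS_BY_QUALITY, 2):
        pairs.append({
            "prompt": prompt,
            "chosen": responses[better],   # from the stronger configuration
            "rejected": responses[worse],  # from the weaker configuration
        })
    return pairs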
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
Language models (LMs) with fewer than 100B parameters are known to perform
poorly on chain-of-thought (CoT) reasoning, in contrast to large LMs, when
solving unseen tasks. In this work, we aim to equip smaller LMs with the
step-by-step reasoning capability by instruction tuning with CoT rationales. In
order to achieve this goal, we first introduce a new instruction-tuning dataset
called the CoT Collection, which augments the existing Flan Collection
(including only 9 CoT tasks) with an additional 1.84 million rationales across
1,060 tasks. We show that CoT fine-tuning of Flan-T5 (3B & 11B) with the CoT
Collection enables smaller LMs to achieve better CoT capabilities on unseen tasks.
On the BIG-Bench-Hard (BBH) benchmark, we report an average improvement of
+4.34% (Flan-T5 3B) and +2.60% (Flan-T5 11B), in terms of zero-shot task
accuracy. Furthermore, we show that instruction tuning with the CoT Collection
allows LMs to possess stronger few-shot learning capabilities on 4
domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and
+2.37% (Flan-T5 11B), even outperforming ChatGPT, which utilizes demonstrations
up to the maximum input length, by a +13.98% margin. Our code, the CoT
Collection data, and model checkpoints are publicly available.
Comment: EMNLP 2023 (Main Conference)
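
CoT instruction tuning here amounts to sequence-to-sequence fine-tuning in which the target prepends the rationale to the answer. The sketch below illustrates this with Hugging Face transformers; the dataset field names, target template, and hyperparameters are assumptions, not the paper's exact recipe.

# Minimal sketch of CoT fine-tuning for Flan-T5; field names and
# hyperparameters are assumptions.
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")  # the 3B variant
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

def preprocess(example):
    # Input: the task instruction; target: the rationale, then the answer.
    model_inputs = tokenizer(example["instruction"], truncation=True)
    target = f"{example['rationale']} So the answer is {example['answer']}."
    model_inputs["labels"] = tokenizer(target, truncation=True)["input_ids"]
    return model_inputs

# Assuming `dataset` is a datasets.Dataset with instruction/rationale/answer
# columns (e.g., loaded from the released CoT Collection):
# tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="flan-t5-cot", num_train_epochs=2),
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    # train_dataset=tokenized,
)
# trainer.train()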